
Conversation


dependabot bot commented on behalf of github · May 15, 2021

Bumps pytorch-lightning from 1.0.3 to 1.3.1.
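Concretely, the PR updates the pinned version in the pip requirements file; judging by the branch name at the bottom of this page, the file lives under python/requirements. A minimal sketch of the change, assuming a plain == pin:

```diff
-pytorch-lightning==1.0.3
+pytorch-lightning==1.3.1
```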

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.3.1] - 2021-05-11

Fixed

  • Fixed DeepSpeed with IterableDatasets (#7362)
  • Fixed Trainer.current_epoch not getting restored after tuning (#7434)
  • Fixed local rank displayed in console log (#7395)

Contributors

@akihironitta @awaelchli @leezu

If we forgot someone because their commit email didn't match their GitHub account, let us know :]

Lightning CLI, PyTorch Profiler, Improved Early Stopping

[1.3.0] - 2021-05-06

Added

  • Added support for the EarlyStopping callback to run at the end of the training epoch (#6944)
  • Added synchronization points before and after setup hooks are run (#7202)
  • Added a teardown hook to ClusterEnvironment (#6942)
  • Added utils for metrics to scalar conversions (#7180)
  • Added utils for NaN/Inf detection for gradients and parameters (#6834)
  • Added more explicit exception message when trying to execute trainer.test() or trainer.validate() with fast_dev_run=True (#6667)
  • Added LightningCLI class to provide simple reproducibility with minimum boilerplate training CLI (#4492, #6862, #7156, #7299)
  • Added gradient_clip_algorithm argument to Trainer for gradient clipping by value (#6123); see the sketch after this list
  • Added a way to print to terminal without breaking up the progress bar (#5470)
  • Added support to checkpoint after training steps in ModelCheckpoint callback (#6146)
  • Added TrainerStatus.{INITIALIZING,RUNNING,FINISHED,INTERRUPTED} (#7173)
  • Added Trainer.validate() method to perform one evaluation epoch over the validation set (#4948)
  • Added LightningEnvironment for Lightning-specific DDP (#5915)
  • Added teardown() hook to LightningDataModule (#4673)
  • Added auto_insert_metric_name parameter to ModelCheckpoint (#6277)
  • Added arg to self.log that enables users to give custom names when dealing with multiple dataloaders (#6274)
  • Added teardown method to BaseProfiler to enable subclasses defining post-profiling steps outside of __del__ (#6370)
  • Added setup method to BaseProfiler to enable subclasses defining pre-profiling steps for every process (#6633)
  • Added no return warning to predict (#6139)
  • Added Trainer.predict config validation (#6543)
  • Added AbstractProfiler interface (#6621)
  • Added support for including module names for forward in the autograd trace of PyTorchProfiler (#6349)
  • Added support for the PyTorch 1.8.1 autograd profiler (#6618)
  • Added outputs parameter to callback's on_validation_epoch_end & on_test_epoch_end hooks (#6120)
  • Added configure_sharded_model hook (#6679)
  • Added support for precision=64, enabling training with double precision (#6595)
  • Added support for DDP communication hooks (#6736)
  • Added artifact_location argument to MLFlowLogger which will be passed to the MlflowClient.create_experiment call (#6677)
  • Added model parameter to precision plugins' clip_gradients signature (#6764, #7231)
  • Added is_last_batch attribute to Trainer (#6825)

... (truncated)
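Several of the 1.3.0 additions above are user-facing Trainer APIs. As a rough illustration (not taken from the PR; ToyModel and the random data are invented placeholders), the sketch below exercises the new gradient_clip_algorithm argument (#6123) and Trainer.validate() (#4948):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyModel(pl.LightningModule):
    """Placeholder module; any LightningModule works the same way."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
train_loader = DataLoader(dataset, batch_size=16)
val_loader = DataLoader(dataset, batch_size=16)

trainer = pl.Trainer(
    max_epochs=1,
    gradient_clip_val=0.5,
    gradient_clip_algorithm="value",  # new in 1.3.0: clip by value instead of by norm
)

model = ToyModel()
trainer.fit(model, train_loader, val_loader)
trainer.validate(model, val_dataloaders=val_loader)  # new in 1.3.0: one epoch over the validation set
```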

Changelog

Sourced from pytorch-lightning's changelog.

[1.3.1] - 2021-05-11

Fixed

  • Fixed DeepSpeed with IterableDatasets (#7362)
  • Fixed Trainer.current_epoch not getting restored after tuning (#7434)
  • Fixed local rank displayed in console log (#7395)

[1.3.0] - 2021-05-06

Added

  • Added support for the EarlyStopping callback to run at the end of the training epoch (#6944)
  • Added synchronization points before and after setup hooks are run (#7202)
  • Added a teardown hook to ClusterEnvironment (#6942)
  • Added utils for metrics to scalar conversions (#7180)
  • Added utils for NaN/Inf detection for gradients and parameters (#6834)
  • Added more explicit exception message when trying to execute trainer.test() or trainer.validate() with fast_dev_run=True (#6667)
  • Added LightningCLI class to provide simple reproducibility with minimum boilerplate training CLI (#4492, #6862, #7156, #7299)
  • Added gradient_clip_algorithm argument to Trainer for gradient clipping by value (#6123)
  • Added a way to print to terminal without breaking up the progress bar (#5470)
  • Added support to checkpoint after training steps in ModelCheckpoint callback (#6146)
  • Added TrainerStatus.{INITIALIZING,RUNNING,FINISHED,INTERRUPTED} (#7173)
  • Added Trainer.validate() method to perform one evaluation epoch over the validation set (#4948)
  • Added LightningEnvironment for Lightning-specific DDP (#5915)
  • Added teardown() hook to LightningDataModule (#4673)
  • Added auto_insert_metric_name parameter to ModelCheckpoint (#6277)
  • Added arg to self.log that enables users to give custom names when dealing with multiple dataloaders (#6274)
  • Added teardown method to BaseProfiler to enable subclasses defining post-profiling steps outside of __del__ (#6370)
  • Added setup method to BaseProfiler to enable subclasses defining pre-profiling steps for every process (#6633)
  • Added no return warning to predict (#6139)
  • Added Trainer.predict config validation (#6543)
  • Added AbstractProfiler interface (#6621)
  • Added support for including module names for forward in the autograd trace of PyTorchProfiler (#6349)
  • Added support for the PyTorch 1.8.1 autograd profiler (#6618)
  • Added outputs parameter to callback's on_validation_epoch_end & on_test_epoch_end hooks (#6120)
  • Added configure_sharded_model hook (#6679)
  • Added support for precision=64, enabling training with double precision (#6595)
  • Added support for DDP communication hooks (#6736)
  • Added artifact_location argument to MLFlowLogger which will be passed to the MlflowClient.create_experiment call (#6677)
  • Added model parameter to precision plugins' clip_gradients signature (#6764, #7231)
  • Added is_last_batch attribute to Trainer (#6825)
  • Added LightningModule.lr_schedulers() for manual optimization (#6567); see the sketch below this list

... (truncated)
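The lr_schedulers() entry above is specific to manual optimization, where Lightning no longer steps optimizers or schedulers for you. A minimal, hypothetical sketch of the intended usage (the module, data, and scheduler choice are placeholders, not from the PR):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyManualModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # opt into manual optimization
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        sch = self.lr_schedulers()  # new in 1.3.0 (#6567): fetch the scheduler(s) manually

        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)

        opt.zero_grad()
        self.manual_backward(loss)  # replaces loss.backward() in manual mode
        opt.step()
        sch.step()  # the user now decides when the scheduler advances

    def configure_optimizers(self):
        opt = torch.optim.SGD(self.parameters(), lr=0.1)
        sch = torch.optim.lr_scheduler.StepLR(opt, step_size=10)
        return [opt], [sch]


data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
pl.Trainer(max_epochs=1).fit(ToyManualModel(), data)
```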

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot bot added the dependencies label (Pull requests that update a dependency file) on May 15, 2021

dependabot bot commented on behalf of github · May 22, 2021

Superseded by #22.

dependabot bot closed this on May 22, 2021
dependabot bot deleted the dependabot/pip/python/requirements/pytorch-lightning-1.3.1 branch on May 22, 2021 at 07:02